49 research outputs found

    Malleable coding for updatable cloud caching

    Full text link
    In software-as-a-service applications provisioned through cloud computing, locally cached data are often modified with updates from new versions. In some cases one may want to preserve both the original and new versions with each edit; in this paper, we focus on cases in which only the latest version must be preserved. Furthermore, it is desirable for the data to not only be compressed but also to be easily modified during updates, since representing information and modifying the representation both incur cost. We examine whether it is possible to have both compression efficiency and ease of alteration, in order to promote codeword reuse. In other words, we study the feasibility of a malleable and efficient coding scheme. The tradeoff between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions) is measured as the length of a reused prefix portion. The region of achievable rates and malleability is found. Drawing from prior work on common information problems, we show that efficient data compression may not be the best engineering design principle when storing software-as-a-service data. In the general case, the goals of efficiency and malleability are fundamentally in conflict. This work was supported in part by an NSF Graduate Research Fellowship (LRV), Grant CCR-0325774, and Grant CCF-0729069. This work was presented at the 2011 IEEE International Symposium on Information Theory [1] and the 2014 IEEE International Conference on Cloud Engineering [2]. The associate editor coordinating the review of this paper and approving it for publication was R. Thobaben. Accepted manuscript.
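
    The prefix-reuse measure above is easy to make concrete: when an update re-encodes only a suffix of the stored codeword, malleability is governed by how much of the old prefix survives. A minimal sketch (the function name and the example bitstrings are ours, purely illustrative, not the paper's construction):

```python
def reused_prefix_len(old_code: str, new_code: str) -> int:
    """Length of the prefix of old_code reused verbatim in new_code."""
    n = 0
    for x, y in zip(old_code, new_code):
        if x != y:
            break
        n += 1
    return n

# Two hypothetical compressed versions of a cached file, before and after
# an update; the update forces re-encoding of the suffix only.
v1 = "110100111010"
v2 = "110100001101"
reuse = reused_prefix_len(v1, v2)
# Malleability is high when most of the old codeword survives; the
# remaining len(v2) - reuse bits are the part that must be rewritten.
```

    The tradeoff studied in the paper is then between making `reuse` large (cheap updates) and keeping the total codeword length near the entropy bound (efficient compression).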

    On palimpsests in neural memory: an information theory viewpoint

    Full text link
    The finite capacity of neural memory and the reconsolidation phenomenon suggest it is important to be able to update stored information as in a palimpsest, where new information overwrites old information. Moreover, changing information in memory is metabolically costly. In this paper, we suggest that information-theoretic approaches may inform the fundamental limits of constructing such a memory system. In particular, we define malleable coding, which considers not only representation length but also ease of representation update, thereby encouraging some form of recycling to convert an old codeword into a new one. Malleability cost is the difficulty of synchronizing compressed versions, and malleable codes are of particular interest when representing information and modifying the representation are both expensive. We examine the tradeoff between compression efficiency and malleability cost under a malleability metric defined with respect to a string edit distance. This introduces a metric topology to the compressed domain. We characterize the exact set of achievable rates and malleability as the solution of a subgraph isomorphism problem. All of this is done within the framework of the optimization approach to biology. Accepted manuscript.
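
    The string edit distance underlying the malleability metric here is the familiar Levenshtein distance. As a point of reference (this is the standard dynamic program, not the paper's construction):

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning s into t."""
    prev = list(range(len(t) + 1))          # distances from "" to prefixes of t
    for i, cs in enumerate(s, 1):
        cur = [i]                           # distance from s[:i] to ""
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # delete cs
                           cur[j - 1] + 1,              # insert ct
                           prev[j - 1] + (cs != ct)))   # substitute / match
        prev = cur
    return prev[-1]
```

    Under such a metric, two codewords are "close" when one can be converted into the other with few edits, which is exactly the notion of cheap update the abstract describes.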

    Malleable Coding with Fixed Reuse

    Full text link
    In cloud computing, storage area networks, remote backup storage, and similar settings, stored data are modified with updates from new versions. Representing information and modifying the representation are both expensive. Therefore, it is desirable for the data to not only be compressed but also to be easily modified during updates. A malleable coding scheme considers both compression efficiency and ease of alteration, promoting codeword reuse. We examine the trade-off between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions), measured as the length of a reused prefix portion. Through a coding theorem, the region of achievable rates and malleability is expressed as a single-letter optimization. Relationships to common information problems are also described.

    Economical sampling of parametric signals

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 107-115). This thesis proposes architectures and algorithms for digital acquisition of parametric signals. It furthermore provides bounds for the performance of these systems in the presence of noise. Our simple acquisition circuitry and low sampling rate enable accurate parameter estimation to be achieved economically. In present practice, sampling and estimation are not integrated: the sampling device does not take advantage of the parametric model, and the estimation assumes that noise in the data is signal-independent additive white Gaussian noise. We focus on estimating the timing information in signals that are linear combinations of scales and shifts of a known pulse. This signal model is well known in a variety of disciplines, such as ultra-wideband signaling and neurobiology. The signal is completely determined by the amplitudes and shifts of the summands. The delays determine a subspace that contains the signals, so estimating the shifts is equivalent to subspace estimation. By contrast, conventional sampling theory yields a least-squares approximation to a signal from a fixed shift-invariant subspace of possible reconstructions. Conventional acquisition takes samples at a rate higher than twice the signal bandwidth. Although this may be feasible, there is a trade-off between power, accuracy, and speed. Under the signal model of interest, when the pulses are very narrow, the number of parameters per unit time (the rate of innovation) is much lower than the Fourier bandwidth. There is thus potential for a much lower sampling rate, so long as nonlinear reconstruction algorithms are used. We present a new sampling scheme that takes simultaneous samples at the outputs of multiple channels. This new scheme can be implemented with simple circuitry and has a successive approximation property that can be used to detect undermodeling. In many regimes our algorithms provide better timing accuracy and resolution than conventional systems. Our new analytical and algorithmic techniques are applied to previously proposed systems, and it is shown that all the systems considered have super-resolution properties. Finally, we consider the same parameter estimation problem when the sampling instances are perturbed by signal-independent timing noise. We give an iterative algorithm that achieves accurate timing estimation by exploiting knowledge of the pulse shape. By Julius Kusuma. Ph.D.
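
    The thesis's estimators are more sophisticated, but the basic idea of exploiting pulse-shape knowledge for sub-sample timing can be sketched with a matched filter whose correlation peak is refined by parabolic interpolation (function and variable names below are ours, purely illustrative):

```python
import math

def matched_filter_delay(samples, pulse):
    """Coarse delay: the lag maximizing correlation with the known pulse,
    refined to sub-sample precision by fitting a parabola to the peak."""
    lags = len(samples) - len(pulse)
    corr = [sum(samples[k + n] * pulse[n] for n in range(len(pulse)))
            for k in range(lags + 1)]
    k = max(range(len(corr)), key=corr.__getitem__)
    if 0 < k < len(corr) - 1:
        a, b, c = corr[k - 1], corr[k], corr[k + 1]
        denom = a - 2 * b + c
        if denom != 0:
            return k + 0.5 * (a - c) / denom   # vertex of the fitted parabola
    return float(k)

# Hypothetical example: a symmetric pulse delayed by 3 samples.
pulse = [math.exp(-((n - 4) / 1.5) ** 2) for n in range(9)]
samples = [0.0] * 3 + pulse + [0.0] * 3
delay = matched_filter_delay(samples, pulse)
```

    Knowledge of the pulse shape enters through the correlation template; with a fractional true delay, the parabolic step is what recovers timing finer than the sampling grid.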

    Improving the Sound Absorption Coefficient of Acoustic Absorbers by Adding Acoustic Metamaterial

    Get PDF
    This study investigates the effect of metamaterial inserts on the sound absorption coefficient of an acoustic absorber. The absorber used is cube-shaped, with dimensions of 5 cm × 5 cm × 5 cm, made from 0.2 cm thick yellow-board paper, with a prism-shaped surface modification. The number of prism elements on the absorber surface was varied between 2, 5, and 7 prisms, following the basic configuration of a Quadratic Residue Diffuser (QRD), except that in this study the 3-element configuration was replaced with a 2-element one. The metamaterial structures used are tubular metamaterials with diameters of 2 mm and 3 mm and lengths of 20 mm and 30 mm. In addition, metamaterial configurations based on the simple cubic (SC) and hexagonal crystal structures were tested. Measurements with a B&K impedance tube set showed an increase in the sound absorption coefficient with the addition of metamaterial. The best result was obtained for the open 7-prism absorber sample with the P20D3 metamaterial insert, with an absorption coefficient above 0.8 from 600 Hz to 1400 Hz. Among the metamaterial configurations, the hexagonal structure showed a higher absorption coefficient at 1250 Hz than SC. Keywords: acoustic absorber, prism element, tubular metamaterial, absorption coefficient

    Sustainable Concrete via Bayesian Optimization

    Full text link
    Eight percent of global carbon dioxide emissions can be attributed to the production of cement, the main component of concrete, which is also the dominant source of CO2 emissions in the construction of data centers. The discovery of lower-carbon concrete formulae is therefore of high significance for sustainability. However, experimenting with new concrete formulae is time consuming and labor intensive, as one usually has to wait to record the concrete's 28-day compressive strength, a quantity whose measurement cannot, by definition, be accelerated. This provides an opportunity for experimental design methodology like Bayesian Optimization (BO) to accelerate the search for strong and sustainable concrete formulae. Herein, we 1) propose modeling steps that make concrete strength amenable to accurate prediction by a Gaussian process model with relatively few measurements, 2) formulate the search for sustainable concrete as a multi-objective optimization problem, and 3) leverage the proposed model to carry out multi-objective BO with real-world strength measurements of the algorithmically proposed mixes. Our experimental results show improved trade-offs between the mixtures' global warming potential (GWP) and their associated compressive strengths, compared to mixes based on current industry practices. Our methods are open-sourced at github.com/facebookresearch/SustainableConcrete. Comment: NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World.
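
    The multi-objective view (GWP versus 28-day strength) can be made concrete with a simple Pareto-dominance filter; the mix names and numbers below are hypothetical, not data from the paper:

```python
def pareto_front(mixes):
    """Keep mixes not dominated by any other. A mix dominates another if it
    has lower-or-equal GWP and higher-or-equal strength, and is strictly
    better in at least one. Each mix is (name, gwp_kg_co2, strength_mpa)."""
    front = []
    for m in mixes:
        dominated = any(o[1] <= m[1] and o[2] >= m[2] and
                        (o[1] < m[1] or o[2] > m[2]) for o in mixes)
        if not dominated:
            front.append(m)
    return front

# Hypothetical candidate mixes (illustrative values only).
mixes = [("baseline",   410.0, 42.0),
         ("fly-ash-30", 320.0, 40.0),
         ("slag-50",    280.0, 36.0),
         ("weak-mix",   350.0, 30.0)]   # dominated by fly-ash-30
front = pareto_front(mixes)
```

    Multi-objective BO aims to push this front outward: each batch of proposed mixes is chosen so that, after the 28-day measurements arrive, the achievable (GWP, strength) trade-off improves.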

    Low-Sampling Rate UWB Channel Characterization and Synchronization

    Get PDF
    We consider the problem of low-sampling-rate, high-resolution channel estimation and timing for digital ultra-wideband (UWB) receivers. We extend some of our recent results in sampling of certain classes of parametric non-bandlimited signals and develop a frequency domain method for channel estimation and synchronization in ultra-wideband systems, which uses sub-Nyquist uniform sampling and well-studied computational procedures. In particular, the proposed method can be used for identification of more realistic channel models, where different propagation paths undergo different frequency-selective fading. Moreover, we show that it is possible to obtain high-resolution estimates of all relevant channel parameters by sampling a received signal below the traditional Nyquist rate. Our approach leads to faster acquisition compared to current digital solutions, allows for slower A/D converters, and potentially reduces power consumption of digital UWB receivers significantly.

    Sampling With Finite Rate of Innovation: Channel and Timing Estimation for UWB and GPS

    Get PDF
    In this work, we consider the problem of channel estimation by using the recently developed theory for sampling of signals with a finite rate of innovation [1]. We present a framework that allows lower-than-Nyquist-rate sampling for timing and channel estimation of both narrowband and wideband channels. In certain cases we demonstrate performance exceeding that of algorithms using Nyquist-rate sampling while working at lower sampling rates, thus saving power and computational complexity.
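
    In its simplest (single-path, K = 1) form, the finite-rate-of-innovation idea reduces to reading a delay off the ratio of two Fourier coefficients, using far fewer measurements than Nyquist-rate sampling would take. The function below is an illustrative sketch under that K = 1 assumption, not the paper's algorithm:

```python
import cmath
import math

def single_delay_from_two_coeffs(S0, S1, N):
    """A single pulse of amplitude a at delay t in a window of length N has
    Fourier coefficients S[m] = a * u**m with u = exp(-2j*pi*t/N), so the
    delay follows from the ratio of two consecutive coefficients."""
    u = S1 / S0
    t = -cmath.phase(u) * N / (2 * math.pi)
    return t % N

# Hypothetical channel: one path, amplitude 2.0, delay 7.3, window N = 32.
N, t_true, a = 32, 7.3, 2.0
S = [a * cmath.exp(-2j * math.pi * m * t_true / N) for m in range(2)]
t_est = single_delay_from_two_coeffs(S[0], S[1], N)
```

    With K paths the same idea generalizes via an annihilating filter whose roots encode the K delays, which is where the "well-studied computational procedures" of the abstracts come in.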

    Sampling of communications systems with bandwidth expansion

    Get PDF
    Many communication systems are bandwidth-expanding: the transmitted signal occupies a bandwidth larger than the symbol rate. The sampling theorems of Kotelnikov, Shannon, Nyquist, et al. show that in order to represent a bandlimited signal, it is necessary to sample at what is popularly referred to as the Shannon or Nyquist rate. However, in many systems the required sampling rate is very high and expensive to implement. In this work we show that it is possible to obtain acceptable, if suboptimal, performance by sampling close to the symbol rate of the signal, using well-studied algorithmic components. This work is based on recent results on sampling for some classes of non-bandlimited signals. In the present paper, we extend these sampling results to the case when there is noise. In our exposition, we use Ultra Wideband (UWB) signals as an example of how our framework can be applied.